2 research outputs found

    Mixed-initiative co-creativity

    Creating and designing with a machine: do we merely create together (co-create) or can a machine truly foster our creativity as human creators? When does such co-creation foster the co-creativity of both humans and machines? This paper investigates the simultaneous and/or iterative process of human and computational creators in a mixed-initiative fashion within the context of game design and attempts to draw from both theory and praxis towards answering the above questions. For this purpose, we first discuss the strong links between mixed-initiative co-creation and theories of human and computational creativity. We then introduce an assessment methodology of mixed-initiative co-creativity and, as a proof of concept, evaluate Sentient Sketchbook as a co-creation tool for game design. Core findings suggest that tools such as Sentient Sketchbook are not mere game authoring systems or mere enablers of creation but, instead, foster human creativity and realize mixed-initiative co-creativity.

    Generative agents for player decision modeling in games

    This paper presents a method for modeling player decision making through the use of agents as AI-driven personas. The paper argues that artificial agents, as generative player models, have properties that allow them to be used as psychometrically valid, abstract simulations of a human player’s internal decision making processes. Such agents can then be used to interpret human decision making, as personas and playtesting tools in the game design process, as baselines for adapting agents to mimic classes of human players, or as believable, human-like opponents. This argument is explored in a crowdsourced decision making experiment, in which the decisions of human players are recorded in a small-scale dungeon themed puzzle game. Human decisions are compared to the decisions of a number of a priori defined “archetypical” agent-personas, and the humans are characterized by their likeness to or divergence from these. Essentially, at each step the action of the human is compared to what actions a number of reinforcement-learned agents would have taken in the same situation, where each agent is trained using a different reward scheme. Finally, extensions are outlined for adapting the agents to represent sub-classes found in the human decision making traces.
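    The per-step comparison described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the persona policies here are hand-written stand-ins for trained reinforcement-learning agents, and all state fields and action names are hypothetical.

    ```python
    # Sketch: characterize a human play trace by its agreement with a set of
    # "archetypical" agent-personas. Each persona is a policy mapping a game
    # state to the action it would take; a human is described by how often
    # each persona would have acted the same way in the same situation.

    def persona_agreement(human_trace, personas):
        """For each persona, compute the fraction of states in the human
        trace where the persona would have taken the same action.

        human_trace: list of (state, action) pairs from one playthrough
        personas:    dict mapping persona name -> policy, where
                     policy(state) -> action
        """
        scores = {}
        for name, policy in personas.items():
            matches = sum(
                1 for state, action in human_trace if policy(state) == action
            )
            scores[name] = matches / len(human_trace)
        return scores


    # Toy personas reflecting different reward schemes (hypothetical names):
    # one rewarded for collecting treasure, one for fighting monsters.
    treasure_hunter = lambda s: "grab" if s["treasure_adjacent"] else "advance"
    monster_slayer = lambda s: "attack" if s["monster_adjacent"] else "advance"

    # A short hypothetical human trace through a dungeon puzzle.
    trace = [
        ({"treasure_adjacent": True, "monster_adjacent": False}, "grab"),
        ({"treasure_adjacent": False, "monster_adjacent": True}, "advance"),
        ({"treasure_adjacent": False, "monster_adjacent": False}, "advance"),
        ({"treasure_adjacent": True, "monster_adjacent": True}, "grab"),
    ]

    scores = persona_agreement(
        trace,
        {"treasure_hunter": treasure_hunter, "monster_slayer": monster_slayer},
    )
    # This human's actions match the treasure-hunter persona at every step.
    ```

    A real pipeline would replace the lambda policies with agents trained under distinct reward functions and could use the resulting agreement vector to cluster players or to pick the closest persona as a playtesting proxy.
    
    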